Results 1 - 20 of 41
2.
Health Promot Int; 39(2), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38558241

ABSTRACT

Although digital health promotion (DHP) technologies for young people are increasingly available in low- and middle-income countries (LMICs), there has been insufficient research investigating whether existing ethical and policy frameworks are adequate to address the challenges and promote the technological opportunities in these settings. In an effort to fill this gap and as part of a larger research project, in November 2022, we conducted a workshop in Cape Town, South Africa, entitled 'Unlocking the Potential of Digital Health Promotion for Young People in Low- and Middle-Income Countries'. The workshop brought together 25 experts from the areas of digital health ethics, youth health and engagement, health policy and promotion, and technology development, predominantly from sub-Saharan Africa (SSA), to explore their views on the ethics, governance and potential policy pathways of DHP for young people in LMICs. Using the World Café method, participants contributed their views on (i) the advantages and barriers associated with DHP for youth in LMICs, (ii) the availability and relevance of ethical and regulatory frameworks for DHP and (iii) the translation of ethical principles into policies and implementation practices required by these policies, within the context of SSA. Our thematic analysis of the ensuing discussion revealed a willingness to foster such technologies if they prove safe, do not exacerbate inequalities, put youth at the center and are subject to appropriate oversight. In addition, our work has led to the potential translation of fundamental ethical principles into a policy roadmap for ethically aligned DHP for youth in SSA.


Subjects
Health Policy, Humans, Adolescent, South Africa, Health Promotion
3.
Nat Commun; 15(1): 1619, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38388497

ABSTRACT

The Consolidated Standards of Reporting Trials extension for Artificial Intelligence interventions (CONSORT-AI) was published in September 2020. Since its publication, several randomised controlled trials (RCTs) of AI interventions have been published, but the completeness and transparency of their reporting are unknown. This systematic review assesses the completeness of reporting of AI RCTs following publication of CONSORT-AI and provides a comprehensive summary of RCTs published in recent years. Sixty-five RCTs were identified, mostly conducted in China (37%) and the USA (18%). Median concordance with CONSORT-AI reporting was 90% (IQR 77-94%), although only 10 RCTs explicitly reported its use. Several items were consistently under-reported, including algorithm version, accessibility of the AI intervention or code, and references to a study protocol. Only 3 of 52 included journals explicitly endorsed or mandated CONSORT-AI. Despite generally high concordance amongst recent AI RCTs, some AI-specific considerations remain systematically poorly reported. Further encouragement of CONSORT-AI adoption by journals and funders may enable more complete adoption of the full CONSORT-AI guidelines.
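To make the concordance summary above concrete, the following is a minimal, hypothetical sketch (not code from the review) of how per-trial concordance with a reporting checklist and the median/IQR summary could be computed; the checklist item names and completed/missing values are invented for illustration.

```python
# Hypothetical sketch of per-trial checklist concordance and a median (IQR) summary.
# Item names and values are invented; the real CONSORT-AI checklist has many more items.
import numpy as np

def concordance(checklist: dict) -> float:
    """Percentage of applicable checklist items reported by one RCT."""
    return 100 * sum(checklist.values()) / len(checklist)

trials = [
    {"algorithm_version": False, "code_accessibility": False, "study_protocol": True,
     "input_data_description": True, "human_ai_interaction": True},
    {"algorithm_version": True, "code_accessibility": False, "study_protocol": True,
     "input_data_description": True, "human_ai_interaction": True},
    {"algorithm_version": False, "code_accessibility": True, "study_protocol": False,
     "input_data_description": True, "human_ai_interaction": True},
]

scores = np.array([concordance(t) for t in trials])
q1, median, q3 = np.percentile(scores, [25, 50, 75])
print(f"Median concordance {median:.0f}% (IQR {q1:.0f}-{q3:.0f}%)")
```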


Subjects
Artificial Intelligence, Reference Standards, China, Randomized Controlled Trials as Topic
4.
Patterns (N Y); 4(11): 100864, 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38035190

ABSTRACT

Artificial intelligence (AI) tools are of great interest to healthcare organizations for their potential to improve patient care, yet their translation into clinical settings remains inconsistent. One of the reasons for this gap is that good technical performance does not inevitably result in patient benefit. We advocate for a conceptual shift wherein AI tools are seen as components of an intervention ensemble. The intervention ensemble describes the constellation of practices that, together, bring about benefit to patients or health systems. Shifting from a narrow focus on the tool itself toward the intervention ensemble prioritizes a "sociotechnical" vision for translation of AI that values all components of use that support beneficial patient outcomes. The intervention ensemble approach can be used for regulation, institutional oversight, and for AI adopters to responsibly and ethically appraise, evaluate, and use AI tools.

5.
Nat Med; 29(11): 2929-2938, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37884627

ABSTRACT

Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research, and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in the literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).


Subjects
Artificial Intelligence, Delivery of Health Care, Humans, Consensus, Systematic Reviews as Topic
6.
J Nucl Med; 64(12): 1848-1854, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37827839

ABSTRACT

The development of artificial intelligence (AI) within nuclear imaging involves several ethically fraught components at different stages of the machine learning pipeline, including during data collection, model training and validation, and clinical use. Drawing on the traditional principles of medical and research ethics, and highlighting the need to ensure health justice, the AI task force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks: privacy of data subjects, data quality and model efficacy, fairness toward marginalized populations, and transparency of clinical performance. We provide preliminary recommendations to developers of AI-driven medical devices for mitigating the impact of these risks on patients and populations.


Subjects
Artificial Intelligence, Machine Learning, Humans, Data Collection, Advisory Committees, Molecular Imaging
7.
JAMA Netw Open; 6(9): e2335377, 2023 Sep 05.
Article in English | MEDLINE | ID: mdl-37747733

ABSTRACT

Importance: Artificial intelligence (AI) has gained considerable attention in health care, yet concerns have been raised around appropriate methods and fairness. Current AI reporting guidelines do not provide a means of quantifying overall quality of AI research, limiting their ability to compare models addressing the same clinical question. Objective: To develop a tool (APPRAISE-AI) to evaluate the methodological and reporting quality of AI prediction models for clinical decision support. Design, Setting, and Participants: This quality improvement study evaluated AI studies in the model development, silent, and clinical trial phases using the APPRAISE-AI tool, a quantitative method for evaluating quality of AI studies across 6 domains: clinical relevance, data quality, methodological conduct, robustness of results, reporting quality, and reproducibility. These domains included 24 items with a maximum overall score of 100 points. Points were assigned to each item, with higher points indicating stronger methodological or reporting quality. The tool was applied to a systematic review on machine learning to estimate sepsis that included articles published until September 13, 2019. Data analysis was performed from September to December 2022. Main Outcomes and Measures: The primary outcomes were interrater and intrarater reliability and the correlation between APPRAISE-AI scores and expert scores, 3-year citation rate, number of Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) low risk-of-bias domains, and overall adherence to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement. Results: A total of 28 studies were included. Overall APPRAISE-AI scores ranged from 33 (low quality) to 67 (high quality). Most studies were moderate quality. The 5 lowest scoring items included source of data, sample size calculation, bias assessment, error analysis, and transparency. Overall APPRAISE-AI scores were associated with expert scores (Spearman ρ, 0.82; 95% CI, 0.64-0.91; P < .001), 3-year citation rate (Spearman ρ, 0.69; 95% CI, 0.43-0.85; P < .001), number of QUADAS-2 low risk-of-bias domains (Spearman ρ, 0.56; 95% CI, 0.24-0.77; P = .002), and adherence to the TRIPOD statement (Spearman ρ, 0.87; 95% CI, 0.73-0.94; P < .001). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.74 to 1.00 for individual items, 0.81 to 0.99 for individual domains, and 0.91 to 0.98 for overall scores. Conclusions and Relevance: In this quality improvement study, APPRAISE-AI demonstrated strong interrater and intrarater reliability and correlated well with several study quality measures. This tool may provide a quantitative approach for investigators, reviewers, editors, and funding organizations to compare the research quality across AI studies for clinical decision support.
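As a rough illustration of how a points-based tool like APPRAISE-AI can be tallied and compared with expert ratings, the sketch below sums hypothetical per-domain points (capped at assumed domain maxima; the actual tool assigns points to 24 individual items) and computes a Spearman correlation against made-up expert scores. The domain names follow the abstract; the weights, scores and ratings are illustrative only.

```python
# Hypothetical scoring sketch; the per-domain maxima and all scores are invented.
from scipy.stats import spearmanr

DOMAIN_MAX = {  # assumed split of the 100-point total across the 6 domains
    "clinical_relevance": 15, "data_quality": 20, "methodological_conduct": 20,
    "robustness_of_results": 15, "reporting_quality": 20, "reproducibility": 10,
}

def overall_score(domain_scores: dict) -> float:
    """Sum domain scores, capping each at its assumed maximum."""
    return sum(min(domain_scores[d], cap) for d, cap in DOMAIN_MAX.items())

# Three hypothetical appraisals and matching expert ratings out of 100.
appraise = [overall_score(s) for s in (
    {"clinical_relevance": 10, "data_quality": 12, "methodological_conduct": 14,
     "robustness_of_results": 8, "reporting_quality": 15, "reproducibility": 4},
    {"clinical_relevance": 13, "data_quality": 16, "methodological_conduct": 17,
     "robustness_of_results": 11, "reporting_quality": 18, "reproducibility": 7},
    {"clinical_relevance": 8, "data_quality": 9, "methodological_conduct": 10,
     "robustness_of_results": 6, "reporting_quality": 11, "reproducibility": 3},
)]
expert = [55, 80, 40]
rho, p = spearmanr(appraise, expert)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```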


Subjects
Artificial Intelligence, Decision Support Systems, Clinical, Humans, Reproducibility of Results, Machine Learning, Clinical Relevance
8.
J Nucl Med; 64(10): 1509-1515, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37620051

ABSTRACT

The deployment of artificial intelligence (AI) has the potential to make nuclear medicine and medical imaging faster, cheaper, and both more effective and more accessible. This is possible, however, only if clinicians and patients feel that these AI medical devices (AIMDs) are trustworthy. Highlighting the need to ensure health justice by fairly distributing benefits and burdens while respecting individual patients' rights, the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks that arise during the deployment of AIMD: autonomy of patients and clinicians, transparency of clinical performance and limitations, fairness toward marginalized populations, and accountability of physicians and developers. We provide preliminary recommendations for governing these ethical risks to realize the promise of AIMD for patients and populations.


Subjects
Nuclear Medicine, Physicians, Humans, Artificial Intelligence, Advisory Committees, Molecular Imaging
9.
Am J Bioeth; 23(9): 55-56, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37647467
10.
JAMA Netw Open; 6(5): e2310659, 2023 May 01.
Article in English | MEDLINE | ID: mdl-37126349

ABSTRACT

Importance: Understanding the views and values of patients is of substantial importance to developing the ethical parameters of artificial intelligence (AI) use in medicine. Thus far, there is limited study on the views of children and youths. Their perspectives contribute meaningfully to the integration of AI in medicine. Objective: To explore the moral attitudes and views of children and youths regarding research and clinical care involving health AI at the point of care. Design, Setting, and Participants: This qualitative study recruited participants younger than 18 years during a 1-year period (October 2021 to March 2022) at a large urban pediatric hospital. A total of 44 individuals who were receiving or had previously received care at a hospital or rehabilitation clinic contacted the research team, but 15 were found to be ineligible. Of the 29 who consented to participate, 1 was lost to follow-up, resulting in 28 participants who completed the interview. Exposures: Participants were interviewed using vignettes on 3 main themes: (1) health data research, (2) clinical AI trials, and (3) clinical use of AI. Main Outcomes and Measures: Thematic description of values surrounding health data research, interventional AI research, and clinical use of AI. Results: The 28 participants included 6 children (ages, 10-12 years) and 22 youths (ages, 13-17 years) (16 female, 10 male, and 3 trans/nonbinary/gender diverse). Mean (SD) age was 15 (2) years. Participants were highly engaged and quite knowledgeable about AI. They expressed a positive view of research intended to help others and had strong feelings about the uses of their health data for AI. Participants expressed appreciation for the vulnerability of potential participants in interventional AI trials and reinforced the importance of respect for their preferences regardless of their decisional capacity. A strong theme for the prospective use of clinical AI was the desire to maintain bedside interaction between the patient and their physician. Conclusions and Relevance: In this study, children and youths reported generally positive views of AI, expressing strong interest and advocacy for their involvement in AI research and inclusion of their voices for shared decision-making with AI in clinical care. These findings suggest the need for more engagement of children and youths in health care AI research and integration.


Subjects
Artificial Intelligence, Medicine, Humans, Male, Child, Female, Adolescent, Qualitative Research, Emotions, Shared Decision Making
13.
Front Public Health; 11: 968319, 2023.
Article in English | MEDLINE | ID: mdl-36908403

ABSTRACT

In this work, we examine magnetic resonance imaging (MRI) and ultrasound (US) appointments at the Diagnostic Imaging (DI) department of a pediatric hospital to discover possible relationships between selected patient features and no-show or long waiting room time endpoints. The chosen features include age, sex, income, distance from the hospital, percentage of non-English speakers in a postal code, percentage of single caregivers in a postal code, appointment time slot (morning, afternoon, evening), and day of the week (Monday to Sunday). We trained univariate Logistic Regression (LR) models using the training sets and identified predictive (significant) features that remained significant in the test sets. We also implemented multivariate Random Forest (RF) models to predict the endpoints. We achieved Area Under the Receiver Operating Characteristic Curve (AUC) of 0.82 and 0.73 for predicting no-show and long waiting room time endpoints, respectively. The univariate LR analysis on DI appointments uncovered the effect of the time of appointment during the day/week, and patients' demographics such as income and the number of caregivers on the no-shows and long waiting room time endpoints. For predicting no-show, we found age, time slot, and percentage of single caregiver to be the most critical contributors. Age, distance, and percentage of non-English speakers were the most important features for our long waiting room time prediction models. We found no sex discrimination among the scheduled pediatric DI appointments. Nonetheless, inequities based on patient features such as low income and language barrier did exist.
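The two modelling steps described above, univariate logistic regression per feature followed by a multivariate random forest evaluated by AUC, can be sketched as follows on synthetic data; the feature names mirror the abstract, but the data, effect sizes and resulting AUCs are invented and are not the study's.

```python
# Sketch on synthetic data: univariate logistic regression per feature,
# then a multivariate random forest scored by AUC on a held-out split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["age", "income", "distance", "pct_non_english", "pct_single_caregiver"]
X = rng.normal(size=(500, len(features)))
p_no_show = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 2])))  # synthetic signal in age, distance
y = rng.binomial(1, p_no_show)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Univariate LR: one model per feature, reporting its held-out AUC.
for i, name in enumerate(features):
    lr = LogisticRegression().fit(X_tr[:, [i]], y_tr)
    auc = roc_auc_score(y_te, lr.predict_proba(X_te[:, [i]])[:, 1])
    print(f"{name}: univariate AUC = {auc:.2f}")

# Multivariate RF on all features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Random forest AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 2))
```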


Subjects
Appointments and Schedules, Magnetic Resonance Imaging, Humans, Child, Magnetic Resonance Imaging/methods, Logistic Models, Hospitals, Machine Learning
16.
Lancet Child Adolesc Health; 7(1): 69-76, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36206789

ABSTRACT

Treatment of anorexia nervosa poses a moral quandary for clinicians, particularly in paediatrics. The challenges of appropriately individualising treatment while balancing prospective benefits against concomitant harms are best highlighted through exploration and discussion of the ethical issues. The purpose of this Viewpoint is to explore the ethical tensions in treating young patients (around ages 10-18 years) with severe anorexia nervosa who are not capable of making treatment-based decisions and describe how harm reduction can reasonably be applied. We propose the term AN-PLUS to refer to the subset of patients with a particularly concerning clinical presentation-poor quality of life, lack of treatment response, medically severe and unstable, and severe symptomatology-who might benefit from a harm reduction approach. From ethics literature, qualitative studies, and our clinical experience, we identify three core ethical themes in making treatment decisions for young people with AN-PLUS: capacity and autonomy, best interests, and person-centred care. Finally, we consider how a harm reduction approach can provide direction for developing a personalised treatment plan that retains a focus on best interests while attempting to mitigate the harms of involuntary treatment. We conclude with recommendations to operationalise a harm reduction approach in young people with AN-PLUS.


Subjects
Anorexia Nervosa, Humans, Adolescent, Child, Anorexia Nervosa/therapy, Quality of Life, Decision Making
17.
Front Digit Health; 4: 929508, 2022.
Article in English | MEDLINE | ID: mdl-36052317

ABSTRACT

As more artificial intelligence (AI) applications are integrated into healthcare, there is an urgent need for standardization and quality-control measures to ensure a safe and successful transition of these novel tools into clinical practice. We describe the role of the silent trial, which evaluates an AI model on prospective patients in real-time, while the end-users (i.e., clinicians) are blinded to predictions such that they do not influence clinical decision-making. We present our experience in evaluating a previously developed AI model to predict obstructive hydronephrosis in infants using the silent trial. Although the initial model performed poorly on the silent trial dataset (AUC 0.90 to 0.50), the model was refined by exploring issues related to dataset drift, bias, feasibility, and stakeholder attitudes. Specifically, we found a shift in distribution of age, laterality of obstructed kidneys, and change in imaging format. After correction of these issues, model performance improved and remained robust across two independent silent trial datasets (AUC 0.85-0.91). Furthermore, a gap in patient knowledge on how the AI model would be used to augment their care was identified. These concerns helped inform the patient-centered design for the user-interface of the final AI model. Overall, the silent trial serves as an essential bridge between initial model development and clinical trials assessment to evaluate the safety, reliability, and feasibility of the AI model in a minimal risk environment. Future clinical AI applications should make efforts to incorporate this important step prior to embarking on a full-scale clinical trial.
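The core mechanics of a silent trial, scoring prospective cases in real time, withholding the predictions from clinicians, and recomputing performance once confirmed outcomes arrive so that drift can be detected, could be organised along the lines of the sketch below. The class, case IDs and scores are hypothetical and are not taken from the hydronephrosis model described in the abstract.

```python
# Hypothetical silent-trial logger: predictions are stored but never surfaced to
# the care team; AUC is recomputed as confirmed outcomes accumulate so that
# drops (e.g. 0.90 -> 0.50) can flag dataset drift before clinical deployment.
from dataclasses import dataclass, field
from sklearn.metrics import roc_auc_score

@dataclass
class SilentTrialLog:
    predictions: dict = field(default_factory=dict)  # case_id -> model score
    outcomes: dict = field(default_factory=dict)     # case_id -> confirmed label

    def record_prediction(self, case_id, score):
        self.predictions[case_id] = score  # logged silently, not shown to clinicians

    def record_outcome(self, case_id, label):
        self.outcomes[case_id] = label

    def current_auc(self):
        ids = [i for i in self.predictions if i in self.outcomes]
        if len({self.outcomes[i] for i in ids}) < 2:
            return None  # AUC undefined until both classes are observed
        return roc_auc_score([self.outcomes[i] for i in ids],
                             [self.predictions[i] for i in ids])

log = SilentTrialLog()
log.record_prediction("case-001", 0.81)  # hypothetical risk scores
log.record_prediction("case-002", 0.12)
log.record_outcome("case-001", 1)
log.record_outcome("case-002", 0)
print(log.current_auc())
```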

19.
Pediatr Radiol; 52(11): 2111-2119, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35790559

ABSTRACT

The integration of human and machine intelligence promises to profoundly change the practice of medicine. The rapidly increasing adoption of artificial intelligence (AI) solutions highlights its potential to streamline physician work and optimize clinical decision-making, also in the field of pediatric radiology. Large imaging databases are necessary for training, validating and testing these algorithms. To better promote data accessibility in multi-institutional AI-enabled radiologic research, these databases centralize the large volumes of data required to develop accurate models and outcome predictions. However, such undertakings must consider the sensitivity of patient information and therefore utilize requisite data governance measures to safeguard data privacy and security, to recognize and mitigate the effects of bias and to promote ethical use. In this article we define data stewardship and data governance, review their key considerations and applicability to radiologic research in the pediatric context, and consider the associated best practices along with the ramifications of poorly executed data governance. We summarize several adaptable data governance frameworks and describe strategies for their implementation in the form of distributed and centralized approaches to data management.


Subjects
Artificial Intelligence, Radiology, Algorithms, Child, Databases, Factual, Humans, Radiologists, Radiology/methods